
    Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data

    Training deep fully convolutional neural networks (F-CNNs) for semantic image segmentation requires access to abundant labeled data. While large datasets of unlabeled image data are available in medical applications, access to manually labeled data is very limited. We propose to automatically create auxiliary labels on initially unlabeled data with existing tools and to use them for pre-training. For the subsequent fine-tuning of the network with manually labeled data, we introduce error corrective boosting (ECB), which emphasizes parameter updates on classes with lower accuracy. Furthermore, we introduce SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that combines skip connections with the unpooling strategy for upsampling. The SD-Net addresses challenges of severe class imbalance and errors along boundaries. With application to whole-brain segmentation of T1-weighted MRI scans, we generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on two datasets with manual annotations. Our results show that the inclusion of auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D scan in 7 seconds, compared to 30 hours for the closest multi-atlas segmentation method, while reaching similar performance. It also outperforms the latest state-of-the-art F-CNN models. Comment: Accepted at MICCAI 2017.
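
    As a rough illustration of the fine-tuning step, the sketch below implements class-wise weighted cross-entropy in PyTorch, where classes with lower validation accuracy receive larger weights; the weighting formula is an assumption for illustration, not the paper's exact ECB definition.

        import numpy as np
        import torch
        import torch.nn.functional as F

        def ecb_class_weights(class_accuracies, eps=1e-3):
            # Classes with lower validation accuracy get larger weights
            # (illustrative formula, not the exact ECB weighting).
            acc = np.asarray(class_accuracies, dtype=np.float64)
            weights = (acc.max() - acc) + eps
            return weights / weights.mean()

        def fine_tune_loss(logits, target, class_weights):
            # Class-weighted cross-entropy applied during fine-tuning.
            w = torch.as_tensor(class_weights, dtype=logits.dtype,
                                device=logits.device)
            return F.cross_entropy(logits, target, weight=w)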

    Fast Predictive Image Registration

    We present a method to predict image deformations based on patch-wise image appearance. Specifically, we design a patch-based deep encoder-decoder network which learns the pixel/voxel-wise mapping between image appearance and registration parameters. Our approach can predict general deformation parameterizations; however, we focus on the large deformation diffeomorphic metric mapping (LDDMM) registration model. By predicting the LDDMM momentum parameterization we retain the desirable theoretical properties of LDDMM while reducing computation time by orders of magnitude: combined with patch pruning, we achieve a 1500x/66x speed-up compared to GPU-based optimization for 2D/3D image registration. Our approach has better prediction accuracy than predicting deformation or velocity fields and results in diffeomorphic transformations. Additionally, we create a Bayesian probabilistic version of our network, which allows evaluation of deformation-field uncertainty through Monte Carlo sampling using dropout at test time. We show that deformation uncertainty highlights areas of ambiguous deformations. We test our method on the OASIS brain image dataset in 2D and 3D.
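
    A minimal sketch of the test-time Monte Carlo dropout described above, assuming a PyTorch model whose dropout layers stay active when the module is put in training mode; the sampled predictions give a mean momentum field and a per-voxel uncertainty estimate.

        import torch

        def mc_dropout_predict(model, patch, n_samples=50):
            # Keep dropout active at prediction time (note: train() also
            # switches batch-norm layers, so use with care).
            model.train()
            with torch.no_grad():
                samples = torch.stack([model(patch) for _ in range(n_samples)])
            return samples.mean(dim=0), samples.std(dim=0)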

    The Open-Source Neuroimaging Research Enterprise

    While brain imaging in the clinical setting is largely a practice of looking at images, research neuroimaging is a quantitative and integrative enterprise. Images are run through complex batteries of processing and analysis routines to generate numeric measures of brain characteristics. Other measures potentially related to brain function – demographics, genetics, behavioral tests, neuropsychological tests – are key components of most research studies. The canonical scanner – PACS – viewing station axis used in clinical practice is therefore inadequate for supporting neuroimaging research. Here, we model the neuroimaging research enterprise as a workflow. The principal components of the workflow include data acquisition, data archiving, data processing and analysis, and data utilization. We also describe a set of open-source applications to support each step of the workflow and the transitions between these steps. These applications include DICOM (Digital Imaging and Communications in Medicine) viewing and storage tools, the Extensible Neuroimaging Archive Toolkit (XNAT) data archiving and exploration platform, and an engine for running processing/analysis pipelines. The overall picture presented is intended to motivate open-source developers to identify key integration and communication points for interoperating with complementary applications.
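
    To make the workflow model concrete, here is a small, hypothetical sketch (names and steps invented for illustration) that strings the four principal components together as a pipeline over a session record.

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Step:
            name: str                          # "acquire", "archive", "process", "utilize"
            run: Callable[[Dict], Dict]        # transforms the session record

        def execute(steps: List[Step], session: Dict) -> Dict:
            for step in steps:
                session = step.run(session)
            return session

        # Hypothetical wiring of the four principal workflow components.
        pipeline = [
            Step("acquire", lambda s: {**s, "dicom_series": "exported from scanner"}),
            Step("archive", lambda s: {**s, "archive_id": "stored in an XNAT-like system"}),
            Step("process", lambda s: {**s, "measures": {"brain_volume_ml": None}}),
            Step("utilize", lambda s: s),      # statistics, sharing, publication
        ]
        result = execute(pipeline, {"subject": "sub-001"})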

    3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation

    Model architectures have been dramatically increasing in size, improving performance at the cost of increased resource requirements. In this paper we propose 3DQ, a ternary quantization method, applied for the first time to 3D fully convolutional neural networks (F-CNNs), enabling 16x model compression while maintaining performance on par with full-precision models. We extensively evaluate 3DQ on two datasets for the challenging task of whole brain segmentation. Additionally, we showcase our method's ability to generalize on two common 3D architectures, namely 3D U-Net and V-Net. Outperforming a variety of baselines, the proposed method is capable of compressing large 3D models to a few MBytes, alleviating the storage needs in space-critical applications. Comment: Accepted to MICCAI 2019.
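
    For orientation, a generic ternary weight quantization looks like the sketch below (a standard thresholding scheme, not necessarily 3DQ's exact formulation): each weight is mapped to {-alpha, 0, +alpha}, so only two bits per weight plus one scale need to be stored.

        import torch

        def ternarize(w, t=0.05):
            # Magnitude threshold: weights below it are zeroed out.
            delta = t * w.abs().max()
            mask = (w.abs() > delta).float()
            # One shared scale for the surviving weights.
            alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
            return alpha * torch.sign(w) * mask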

    Deep Learning networks with p-norm loss layers for spatial resolution enhancement of 3D medical images

    Thurnhofer-Hemsi K., López-Rubio E., Roé-Vellvé N., Molina-Cabello M.A. (2019) Deep Learning Networks with p-norm Loss Layers for Spatial Resolution Enhancement of 3D Medical Images. In: Ferrández Vicente J., Álvarez-Sánchez J., de la Paz López F., Toledo Moreo J., Adeli H. (eds) From Bioinspired Systems and Biomedical Applications to Machine Learning. IWINAC 2019. Lecture Notes in Computer Science, vol 11487. Springer, Cham. Nowadays, obtaining high-quality magnetic resonance (MR) images is a complex problem due to several acquisition factors, but it is crucial in order to perform good diagnostics. Resolution enhancement is a typical procedure applied after image generation. State-of-the-art works gather a large variety of methods for super-resolution (SR), among which deep learning has become very popular in recent years. Most SR deep-learning methods are based on the minimization of the residuals by the use of Euclidean loss layers. In this paper, we propose an SR model based on the use of a p-norm loss layer to improve the learning process and obtain a better high-resolution (HR) image. This method was implemented using a three-dimensional convolutional neural network (CNN) and tested for several norms in order to determine the most robust fit. The proposed methodology was trained and tested with sets of structural T1-weighted MR images and showed better outcomes quantitatively, in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the restored and calculated residual images showed better CNN outputs. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
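
    The loss itself is compact; a minimal PyTorch version of a p-norm residual loss is sketched below (the paper's network architecture and training details are not reproduced here).

        import torch

        def p_norm_loss(prediction, target, p=1.5):
            # Generalises the usual Euclidean (p = 2) residual loss; the
            # exponent p is treated as a hyper-parameter to be tuned.
            return (prediction - target).abs().pow(p).mean()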

    CompNet: Complementary Segmentation Network for Brain MRI Extraction

    Brain extraction is a fundamental step for most brain imaging studies. In this paper, we investigate the problem of skull stripping and propose complementary segmentation networks (CompNets) to accurately extract the brain from T1-weighted MRI scans, for both normal and pathological brain images. The proposed networks are designed in the framework of encoder-decoder networks and have two pathways to learn features from both the brain tissue and its complementary part located outside of the brain. The complementary pathway extracts the features in the non-brain region and leads to a robust solution to brain extraction from MRIs with pathologies, which do not exist in our training dataset. We demonstrate the effectiveness of our networks by evaluating them on the OASIS dataset, resulting in state-of-the-art performance under a two-fold cross-validation setting. Moreover, the robustness of our networks is verified by testing on images with introduced pathologies and by showing their invariance to unseen brain pathologies. In addition, our complementary network design is general and can be extended to address other image segmentation problems with better generalization. Comment: 8 pages; accepted to MICCAI 2018.
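
    As a rough sketch of the complementary idea (an assumed formulation, not the paper's exact objective), one pathway can be trained on the brain mask and the other on its complement, with an extra term encouraging the two predictions to cover the image between them:

        import torch
        import torch.nn.functional as F

        def soft_dice(prob, target, eps=1e-6):
            inter = (prob * target).sum()
            return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

        def complementary_loss(brain_prob, nonbrain_prob, brain_mask):
            # Brain pathway, complementary (non-brain) pathway, and a
            # coverage term tying the two predictions together.
            loss_brain = soft_dice(brain_prob, brain_mask)
            loss_nonbrain = soft_dice(nonbrain_prob, 1.0 - brain_mask)
            coverage = F.mse_loss(brain_prob + nonbrain_prob,
                                  torch.ones_like(brain_prob))
            return loss_brain + loss_nonbrain + coverage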

    Responsibility loadings for dental services by general dentists

    BACKGROUND: Responsibility loadings determine relative value units of dental services that translate services into a common scale of work effort. The aims of this paper were to elicit responsibility loadings for a subset of dental services and to relate responsibility loadings to ratings of importance of the components of responsibility. METHODS: Responsibility loadings and ratings of components of responsibility were collected using mailed questionnaires from a random sample of Australian private general-practice dentists in 2007 (response rate = 77%). RESULTS: Median responsibility loadings were 1.25 for an initial oral examination and for a 3+-surface amalgam restoration, 1.50 for a simple extraction and for root canal obturation (single canal), and 1.75 for subgingival curettage (per quadrant). Across the five services, coefficients from a multivariate logit model showed that ratings of importance of knowledge (0.34), dexterity (0.24), physical effort (0.28) and mental effort (0.48) were associated with responsibility loadings (P < 0.05). CONCLUSIONS: The elicited median responsibility loadings showed agreement with previous estimates, indicating convergent validity. Components of responsibility were associated with loadings, indicating that components can explain and predict responsibility aspects of dental service provision. David S. Brennan and A. John Spencer.

    A Multi-Armed Bandit to Smartly Select a Training Set from Big Medical Data

    With the availability of big medical image data, the selection of an adequate training set is becoming increasingly important to address the heterogeneity of different datasets. Simply including all the data not only incurs high processing costs but can even harm the prediction. We formulate the smart and efficient selection of a training dataset from big medical image data as a multi-armed bandit problem, solved by Thompson sampling. Our method assumes that image features are not available at the time of the selection of the samples, and therefore relies only on meta information associated with the images. Our strategy simultaneously exploits data sources with high chances of yielding useful samples and explores new data regions. For our evaluation, we focus on the application of estimating age from a brain MRI. Our results on 7,250 subjects from 10 datasets show that our approach leads to higher accuracy while requiring only a fraction of the training data. Comment: MICCAI 2017 Proceedings.
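
    A simplified Bernoulli version of the selection loop is sketched below (the paper relies on meta information and its own reward definition; the Beta/Bernoulli setup here is an assumption for illustration): each data source keeps a Beta posterior over the chance that sampling from it helps, and Thompson sampling draws from every posterior and trains on the source with the highest draw.

        import random

        def thompson_select(sources, successes, failures):
            # Draw from each source's Beta posterior and pick the best draw;
            # this trades off exploiting good sources and exploring new ones.
            draws = {s: random.betavariate(1 + successes[s], 1 + failures[s])
                     for s in sources}
            return max(draws, key=draws.get)

        # After training on a batch from the chosen source, increment
        # successes[source] if validation accuracy improved, else failures[source].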

    E11, generalised space-time and equations of motion in four dimensions

    We construct the non-linear realisation of the semi-direct product of E11 and its first fundamental representation at low levels in four dimensions. We include the fields for gravity, the scalars and the gauge fields, as well as the duals of these fields. The generalised space-time, upon which the fields depend, consists of the usual coordinates of four-dimensional space-time and Lorentz-scalar coordinates which belong to the 56-dimensional representation of E7. We demand that the equations of motion are first order in derivatives of the generalised space-time and then show that they are essentially uniquely determined by the properties of the E11 Kac-Moody algebra and its first fundamental representation. The two lowest equations correctly describe the equations of motion of the scalars and the gauge fields once one takes the fields to depend only on the usual four-dimensional space-time.
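
    For orientation only (standard background on four-dimensional maximal supergravity, not quoted from the paper): first-order gauge-field equations of this kind are usually written as an E7(7)-covariant twisted self-duality condition, up to convention-dependent signs,

        \[
          \mathcal{F}^{M}_{\mu\nu}
            \;=\; \tfrac{1}{2}\,\varepsilon_{\mu\nu\rho\sigma}\,
                  \Omega^{MN}\,\mathcal{M}_{NP}\,\mathcal{F}^{P\,\rho\sigma},
          \qquad M,N,P = 1,\dots,56,
        \]

    where \(\Omega\) is the invariant symplectic form on the 56-dimensional representation and \(\mathcal{M}\) is the scalar-dependent matrix built from the E7(7)/SU(8) coset representative.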

    Measurements in two bases are sufficient for certifying high-dimensional entanglement

    High-dimensional encoding of quantum information provides a promising method of transcending current limitations in quantum communication. One of the central challenges in the pursuit of such an approach is the certification of high-dimensional entanglement. In particular, it is desirable to do so without resorting to inefficient full state tomography. Here, we show how carefully constructed measurements in two bases (one of which is not orthonormal) can be used to faithfully and efficiently certify bipartite high-dimensional states and their entanglement for any physical platform. To showcase the practicality of this approach under realistic conditions, we put it to the test for photons entangled in their orbital angular momentum. In our experimental setup, we are able to verify 9-dimensional entanglement for a pair of photons on an 11-dimensional subspace each, at present the highest amount certified without any assumptions on the state. Comment: 11+14 pages, 2+7 figures.
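
    The dimensionality claim rests on a standard fidelity witness (background fact, stated here for context rather than quoted from the paper): for the maximally entangled target state, a fidelity above k/d certifies a Schmidt number of at least k+1, and the two-basis measurements serve to lower-bound this fidelity without tomography.

        \[
          F(\rho,\Phi_d) \;=\; \langle \Phi_d|\,\rho\,|\Phi_d\rangle \;>\; \frac{k}{d}
          \quad\Longrightarrow\quad
          \operatorname{SN}(\rho) \;\ge\; k+1,
          \qquad
          |\Phi_d\rangle \;=\; \frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|jj\rangle.
        \]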